167 research outputs found

    The Simple Publishing Interface (SPI)

    Ternier, S., Massart, D., Totschnig, M., Klerkx, J., & Duval, E. (2010). The Simple Publishing Interface (SPI). D-Lib Magazine, September/October 2010, Volume 16, Number 9/10. doi:10.1045/september2010-ternier
    The Simple Publishing Interface (SPI) is a new publishing protocol, developed under the auspices of the European Committee for Standardization (CEN) workshop on learning technologies. This protocol aims to facilitate communication between content-producing tools and repositories that persistently manage learning resources and metadata. The SPI work focuses on two problems: (1) facilitating the metadata and resource publication process (publication in this context refers to the ability to ingest metadata and resources); and (2) enabling interoperability between various components in a federation of repositories. This article discusses the different contexts where a protocol for publishing resources is relevant. SPI contains an abstract domain model and presents several methods that a repository can support. An Atom Publishing Protocol binding is proposed that allows SPI to be implemented with a concrete technology and enables interoperability between applications.
    European Committee for Standardization (CEN), CEN/Expert/2009/3
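Under an AtomPub binding of the kind the abstract describes, a repository exposes a collection URI and a producing tool publishes by POSTing an Atom entry to it. As a rough illustration only, the following Python sketch builds such a publish request; the collection URL and the entry payload are hypothetical placeholders, not taken from the SPI specification.

```python
import urllib.request

# Hypothetical Atom entry carrying a minimal metadata record.
ENTRY = """<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Example learning resource</title>
  <author><name>Example Author</name></author>
  <content type="text">Metadata describing the resource.</content>
</entry>"""

def build_publish_request(collection_url: str) -> urllib.request.Request:
    """Build an AtomPub-style POST that would ingest one metadata entry."""
    return urllib.request.Request(
        collection_url,
        data=ENTRY.encode("utf-8"),
        headers={"Content-Type": "application/atom+xml;type=entry"},
        method="POST",
    )

# Placeholder endpoint; a real SPI repository would advertise its own URI.
req = build_publish_request("https://repository.example.org/spi/collection")
```

A conforming server would respond with 201 Created and a Location header for the new resource, which is what makes the binding usable for ingest across independently developed tools.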

    Bluetongue Virus in Wild Deer, Belgium, 2005–2008

    To investigate bluetongue virus serotype 8 infection in Belgium, we conducted a virologic and serologic survey on 2,416 free-ranging cervids during 2005–2008. Infection emerged in 2006 and spread over the study area in red deer, but not in roe deer.

    On the sequential Massart algorithm for statistical model checking

    Several schemes have been proposed in Statistical Model Checking (SMC) for estimating the probability of a property's occurrence with a predefined confidence and absolute or relative error. Simulations can, however, be costly if many samples are required, and the usual algorithms implemented in statistical model checkers tend to be conservative. Bayesian and rare-event techniques can be used to reduce the sample size, but they cannot be applied without prerequisites or knowledge about the system under scrutiny. Recently, sequential algorithms based on Monte Carlo estimation and Massart bounds have been proposed that reduce the sample size while providing guarantees on error bounds, and these have been shown to outperform alternative frequentist approaches [15]. In this work, we discuss some features regarding the distribution and the optimisation of these algorithms.
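For a sense of the sample sizes at stake: the conservative baseline that such sequential schemes improve on is the a-priori Okamoto (Chernoff-Hoeffding) bound, which fixes the number of simulations from the absolute error ε and confidence parameter δ alone. A minimal sketch of that baseline (not the article's Massart-based algorithm, whose bound also depends on the estimated probability):

```python
import math

def okamoto_sample_size(epsilon: float, delta: float) -> int:
    """Number of Monte Carlo samples guaranteeing that the estimated
    probability lies within absolute error `epsilon` of the true value
    with confidence at least 1 - delta (Chernoff-Hoeffding bound)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# With epsilon = 0.01 and delta = 0.05 the bound requires 18445 samples.
n = okamoto_sample_size(0.01, 0.05)
```

Massart-bound-based schemes need substantially fewer samples when the true probability is far from 1/2, which is the gain the sequential algorithms exploit.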

    Gaia Data Release 1. Summary of the astrometric, photometric, and survey properties

    Context. At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. Aims. A summary of Gaia DR1 is presented along with illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Methods. The raw data collected by Gaia during the first 14 months of the mission have been processed by the Gaia Data Processing and Analysis Consortium (DPAC) and turned into an astrometric and photometric catalogue. Results. Gaia DR1 consists of three components: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the HIPPARCOS and Tycho-2 catalogues – a realisation of the Tycho-Gaia Astrometric Solution (TGAS) – and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of ∼3000 Cepheid and RR-Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas yr⁻¹ for the proper motions. A systematic component of ∼0.3 mas should be added to the parallax uncertainties. For the subset of ∼94 000 HIPPARCOS stars in the primary data set, the proper motions are much more precise, at about 0.06 mas yr⁻¹. For the secondary astrometric data set, the typical uncertainty of the positions is ∼10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to ∼0.03 mag over the magnitude range 5 to 20.7. Conclusions. Gaia DR1 is an important milestone ahead of the next Gaia data release, which will feature five-parameter astrometry for all sources. Extensive validation shows that Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.

    A new approach to digitized cognitive monitoring: validity of the SelfCog in Huntington's disease

    Cognitive deficits represent a hallmark of neurodegenerative diseases, but evaluating their progression is complex. Most current evaluations involve lengthy paper-and-pencil tasks which are subject to learning effects dependent on the mode of response (motor or verbal), the country's language, or the examiner. To address these limitations, we hypothesized that applying neuroscience principles may offer a fruitful alternative. We thus developed the SelfCog, a digitized battery that tests motor, executive, visuospatial, language and memory functions in 15 min. All cognitive functions are tested according to the same paradigm, and a randomization algorithm provides a new test at each assessment with a constant level of difficulty. Here, we assessed its validity, reliability and sensitivity to detect decline in early-stage Huntington's disease in a prospective and international multilingual study (France, the UK and Germany). Fifty-one of the 85 participants with Huntington's disease and 40 of the 52 healthy controls included at baseline were followed up for 1 year. Assessments included a comprehensive clinical battery comprising currently standard cognitive tests alongside the SelfCog. We estimated associations between each of the clinical assessments and the SelfCog using Spearman's correlation, and proneness to retest effects and sensitivity to decline through linear mixed models. Longitudinal effect sizes were estimated for each cognitive score. Voxel-based morphometry and tract-based spatial statistics analyses were conducted to assess the consistency between performance on the SelfCog and MRI 3D-T1 and diffusion-weighted imaging in a subgroup that underwent MRI at baseline and after 12 months. The SelfCog detected the decline of patients with Huntington's disease over a 1-year follow-up period with satisfactory psychometric properties. Patients with Huntington's disease were correctly differentiated from controls.
The SelfCog showed larger effect sizes than the classical cognitive assessments. Its scores were associated with grey and white matter damage at baseline and over 1 year. Given its performance in longitudinal analyses of the Huntington's disease cohort, it is likely to become a useful tool for measuring cognition in Huntington's disease. These results highlight the value of grounding cognitive evaluation in neuroscience principles and eventually applying them to the evaluation of all neurodegenerative diseases.

    Cognitive decline in Huntington's disease in the Digitalized Arithmetic Task (DAT)

    Background: Efficient cognitive tasks sensitive to longitudinal deterioration in small cohorts of Huntington's disease (HD) patients are lacking in HD research. We thus developed and assessed the Digitalized Arithmetic Task (DAT), which combines inner language and executive functions in approximately 4 minutes. Methods: We assessed the psychometric properties of the DAT in three languages, across four European sites, in 77 early-stage HD patients (age: 52 ± 11 years; 27 females) and 57 controls (age: 50 ± 10 years; 31 females). Forty-eight HD patients and 34 controls were followed up for up to one year; 96 participants (46 HD patients) underwent MRI brain imaging at baseline and 50 participants (22 HD patients) at one year. Linear mixed models and Pearson correlations were used to assess associations with clinical assessments. Results: At baseline, HD patients were less accurate (p = 0.0002) and slower to respond (p < 0.0001) on the DAT than controls. Test-retest reliability in HD patients ranged from good to excellent for response time (r = 0.63–0.79) and from questionable to acceptable for accuracy (r = 0.52–0.69). Only the DAT, the Mattis Dementia Rating Scale, the Symbol Digit Modalities Test, and Total Functional Capacity scores were able to detect a decline within a one-year follow-up in HD patients (all p < 0.05). In contrast with all the other cognitive tasks, the DAT correlated with striatal atrophy over time (p = 0.037) but not with motor impairment. Conclusions: The DAT is fast, reliable, motor-free, applicable in several languages, and able to unmask cognitive decline correlated with striatal atrophy in small cohorts of HD patients. This makes it a promising endpoint for future trials in HD and other neurodegenerative diseases.

    Un espace de formation francophone dédié à l'apprentissage de l'informatique

    The introduction of computer science teaching in French secondary schools (lycées) will enable the next generations to master digital technology and participate in its development. The main challenge is therefore the training of teachers. How can such a challenge be met? First, by building a community of learning and practice: since 2021, the AEIF and the CAI project have been welcoming and supporting hundreds of colleagues, in service or in training, discussing every topic and sharing resources on a dedicated forum and mailing lists. Then, since early 2022, by offering two online courses: (1) a course on the fundamentals of computer science, representing on the order of 200 hours of work, with introductory and advanced training resources; more than a simple "MOOC", these are the resources of a complete training programme, with support designed to make good use of them; (2) a course on learning to teach through practice, by co-preparing the teaching activities of upcoming lessons, sharing didactic practices, and taking a step back pedagogically, including from the perspective of a pedagogy of equality. Those preparing for the CAPES will also find advice and avenues for work there. If you do not want to face this teaching of computer science alone and would like support over the next three years, come and see us that day.

    Objectively characterizing Huntington's disease using a novel upper limb dexterity test.

    Background: The Clinch Token Transfer Test (C3t) is a bimanual coin-transfer task that incorporates cognitive tasks to add complexity. This study explored the concurrent and convergent validity of the C3t as a simple, objective assessment of impairment, reflective of disease severity in Huntington's disease, that is not reliant on clinical expertise for administration. Methods: One hundred and five participants presenting with pre-manifest (n = 16) or manifest (TFC-Stage-1 n = 39; TFC-Stage-2 n = 43; TFC-Stage-3 n = 7) Huntington's disease completed the Unified Huntington's Disease Rating Scale and the C3t at baseline. Of these, thirty-three were followed up after 12 months. Regression was used to estimate baseline individual and composite clinical scores (including cognitive, motor, and functional ability) from baseline C3t scores. Correlations between C3t and clinical scores were assessed using Spearman's r and visually inspected in relation to disease severity using scatterplots. Effect sizes over 12 months provided an indication of the longitudinal behaviour of the C3t in relation to clinical measures. Results: Baseline C3t scores predicted baseline clinical scores to within 9–13% accuracy, being associated with individual and composite clinical scores. Changes in C3t scores over 12 months were small (effect sizes ≤ 0.15) and mirrored the change in clinical scores. Conclusion: The C3t demonstrates promise as a simple, easy-to-administer, objective outcome measure capable of predicting impairment reflective of Huntington's disease severity, and offers a viable solution to support remote clinical monitoring. It may also offer utility as a screening tool for recruitment to clinical trials, given preliminary indications of association with the prognostic index normed for Huntington's disease.

    The endocrine tumor summit 2008: appraising therapeutic approaches for acromegaly and carcinoid syndrome

    The Endocrine Tumor Summit convened in December 2008 to address 6 statements prepared by panel members that reflect important questions in the treatment of acromegaly and carcinoid syndrome. Data pertinent to each statement were identified through literature review by one of the 9 panel members, enabling a critical evaluation of the statements and the evidence supporting or refuting them. Three statements addressed the validity of serum growth hormone (GH) and insulin-like growth factor-I (IGF-I) concentrations as indicators or predictors of disease in acromegaly. Statements regarding the effects of preoperative somatostatin analog use on pituitary surgical outcomes, their effects on hormone and symptom control in carcinoid syndrome, and the efficacy of extended dosing intervals were also reviewed. Panel opinions, based on the level of available scientific evidence, were polled. Finally, their views were compared with those of surveyed community-based endocrinologists and neurosurgeons.